12. Lab: Feature Extraction
Feature Extraction
The problem is that AlexNet was trained on the ImageNet database, which has 1,000 classes of images. You can see the full list of classes in the caffe_classes.py file. None of those classes involve traffic signs.
To successfully classify our traffic sign images, you need to remove the final, 1000-neuron classification layer and replace it with a new, 43-neuron classification layer.
This is called feature extraction, because you extract the image features inferred by the penultimate layer and pass those features to a new classification layer.
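As a minimal sketch of the idea, here is what the new layer amounts to in plain NumPy. The names (`fc7_features`) and sizes are assumptions for illustration, not taken from the lab's starter code; in AlexNet, the penultimate fully connected layer (fc7) has 4096 neurons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the penultimate-layer (fc7) activations for one image.
fc7_features = rng.normal(size=(1, 4096))

# New 43-neuron classification layer, randomly initialized
# (one output per traffic-sign class).
W = rng.normal(scale=0.01, size=(4096, 43))
b = np.zeros(43)

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

logits = fc7_features @ W + b   # shape (1, 43)
probs = softmax(logits)         # one probability per traffic-sign class
```

In the lab itself you would build this layer in TensorFlow on top of the frozen AlexNet graph, but the math is the same: a matrix multiply, a bias, and a softmax over 43 classes.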
Open feature_extraction.py and complete the TODO(s).
Your output will probably not match the sample output below exactly, since it depends on the (likely random) initialization of the network's weights. That said, every output class you see should appear in signnames.csv.
Image 0
Double curve: 0.059
Ahead only: 0.048
Road work: 0.047
Dangerous curve to the right: 0.047
Road narrows on the right: 0.039
Image 1
General caution: 0.079
No entry: 0.067
Dangerous curve to the right: 0.054
Speed limit (50km/h): 0.053
Ahead only: 0.048
Time: 0.500 seconds
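A top-5 listing like the one above can be produced by sorting the class probabilities and looking each index up in signnames.csv. The sketch below uses a tiny hypothetical stand-in dictionary and made-up probabilities instead of the real file, purely to show the mechanics.

```python
import numpy as np

# Tiny stand-in for the signnames.csv lookup (class id -> sign name).
sign_names = {0: "Speed limit (20km/h)", 9: "No passing", 25: "Road work"}

# Made-up probabilities over the 43 traffic-sign classes.
probs = np.zeros(43)
probs[25], probs[9], probs[0] = 0.059, 0.048, 0.012

# Indices of the highest probabilities, largest first.
top = np.argsort(probs)[::-1][:3]
for i in top:
    print(f"{sign_names.get(i, str(i))}: {probs[i]:.3f}")
```

With a freshly initialized final layer, these probabilities are close to uniform, which is why the sample values above all hover near 1/43 ≈ 0.023–0.08.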